    Coding leaf exploratory tasks under 3D occupancy-based models

    Abstract of the work presented at the seminar held at the Instituto de Robótica e Informática Industrial (IRII-CSIC-UPC) on May 8, 2014.
    Peer Reviewed

    On inferring intentions in shared tasks for industrial collaborative robots

    Inferring human operators' actions in shared collaborative tasks plays a crucial role in enhancing the cognitive capabilities of industrial robots. In these incipient collaborative robotic applications, humans and robots must share not only space but also forces and the execution of a task. In this article, we present a robotic system that is able to identify different human intentions and to adapt its behavior accordingly, using force data alone. To accomplish this aim, three major contributions are presented: (a) force-based recognition of the operator's intent, (b) a force-based dataset of physical human-robot interaction, and (c) validation of the whole system in a scenario inspired by a realistic industrial application. This work is an important step towards a more natural and user-friendly manner of physical human-robot interaction in scenarios where humans and robots collaborate in the accomplishment of a task.
    Peer Reviewed. Postprint (published version).
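
    The abstract does not detail the recognition pipeline, so the following is only a minimal sketch of force-based intent classification: windowed statistics over force/torque samples fed to a nearest-centroid classifier. The intent classes, feature choice, and classifier are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Minimal sketch: classify operator intent from a window of force/torque
# samples. Intent classes, features, and classifier are assumptions made
# for illustration, not the paper's pipeline.

INTENTS = ["pull", "push", "hold"]  # hypothetical intent classes

def window_features(wrench):
    """wrench: (N, 6) array of [Fx, Fy, Fz, Tx, Ty, Tz] samples."""
    return np.concatenate([wrench.mean(axis=0), wrench.std(axis=0)])

def fit_centroids(windows, labels):
    """Nearest-centroid model: one mean feature vector per intent class."""
    feats = np.array([window_features(w) for w in windows])
    labels = np.array(labels)
    return {c: feats[labels == c].mean(axis=0) for c in INTENTS}

def classify(wrench, centroids):
    f = window_features(wrench)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))
```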

    Task-oriented viewpoint planning for free-form objects

    A thesis submitted to the Universitat Politècnica de Catalunya to obtain the degree of Doctor of Philosophy. Doctoral programme: Automatic Control, Robotics and Computer Vision. This thesis was completed at the Institut de Robòtica i Informàtica Industrial, CSIC-UPC.

    This thesis deals with active sensing and its use in real exploration tasks under both scene ambiguities and measurement uncertainties. While object modeling is the implicit objective of most active sensing algorithms, in this work we have explored new strategies to deal with more generic and more complex tasks. Active sensing requires the ability to move the perceptual system to gather new information. Our approach uses a robot manipulator with a 3D Time-of-Flight (ToF) camera attached to the end-effector. As a complex task, we have focused our attention on plant phenotyping. Plants are complex objects, with leaves that change their position and size over time. Viewpoints valid for one plant are hardly valid for another, even of the same species. Some instruments, such as chlorophyll meters or disk sampling tools, must be positioned precisely over a particular location of the leaf. Their use therefore requires modeling specific regions of interest of the plant, including the free space needed for avoiding obstacles and approaching the leaf with the tool. It is easy to see that predefined camera trajectories are not valid here, and that a single view is usually insufficient to acquire all the required information.

    The overall objective of this thesis is to solve complex active sensing tasks by embedding their exploratory goal into a pre-estimated geometrical model, using information gain as the fundamental guideline for the reward function. The main contributions can be divided into two groups: first, the evaluation of ToF cameras and their calibration to assess the uncertainty of the measurements (presented in Part I); and second, the proposal of a framework capable of embedding the task, modeled as free and occupied space, which takes the modeled sensor uncertainty into account to improve the action selection algorithm (presented in Part II). This thesis has given rise to 14 publications, including 5 in indexed journals, and its results have been used in the GARNICS European project.

    The complete framework is based on the Next-Best-View methodology and can be summarized in the following main steps. First, an initial view of the object (e.g., a plant) is acquired. From this initial view and given a set of candidate viewpoints, the expected gain obtained by moving the robot and acquiring the next image is computed. This computation takes into account the uncertainty of all the different pixels of the sensor, the expected information based on a predefined task model, and the possible occlusions. Once the most promising view is selected, the robot moves, takes a new image, integrates this information into the model, and evaluates the set of remaining views again. Finally, the task terminates when enough information has been gathered. In our examples, this process enables the robot to perform a measurement on top of a leaf. The key ingredient is to model the complexity of the task in a layered representation of free-occupied occupancy grid maps. This representation naturally encodes the requirements of the task, maintains and updates the belief state with the measurements performed, allows the expected gains of all potential viewpoints to be simulated and computed, and encodes the termination condition.

    During this work, ToF camera technology has evolved enormously; it is now very popular, and ToF cameras are already embedded in some consumer devices. Although the quality of the measurements has improved considerably, it is still not uniform across the sensor. We believe, as demonstrated in various experiments in this work, that careful modeling of the sensor's uncertainty is highly beneficial and helps to design better decision systems. In our case, it enables a more realistic computation of the information-gain measure and, consequently, a better selection criterion.

    This work has been partially supported by a JAE fellowship of the Spanish Scientific Research Council (CSIC), the Spanish Ministry of Science and Innovation, the Catalan Research Commission, and the European Commission under the research projects DPI2008-06022 (PAU: Percepción y acción ante incertidumbre), DPI2011-27510 (PAU+: Perception and Action in Robotics Problems with Large State Spaces), 201350E102 (MANIPlus: Manipulación robotizada de objetos deformables), 2009-SGR-155 (SGR ROBÒTICA: Grup de recerca consolidat - Grup de Robòtica), FP6-2004-IST-4-27657 (EU PACO PLUS project), FP7-ICT-2009-4-247947 (GARNICS: Gardening with a cognitive system), and FP7-ICT-2009-6-269959 (IntellAct: Intelligent observation and execution of Actions and manipulations).
    Peer Reviewed
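
    As a rough illustration of the Next-Best-View loop summarized above, here is a minimal sketch in Python. The grid representation, visibility sets, gain estimate, and termination threshold are simplified stand-ins for the thesis's layered occupancy grids and per-pixel uncertainty model.

```python
import numpy as np

# Minimal Next-Best-View loop (illustrative only). `visibility[v]` is the
# index array of grid cells that viewpoint v would observe, with occlusion
# handling abstracted away; `update` fuses a real measurement into the grid.

def entropy(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def expected_gain(grid, cells):
    # Expected entropy reduction over the cells this viewpoint observes.
    return entropy(grid[cells]).sum()

def next_best_view(grid, candidates, visibility, update, min_gain=1.0):
    while candidates:
        best = max(candidates, key=lambda v: expected_gain(grid, visibility[v]))
        if expected_gain(grid, visibility[best]) < min_gain:
            break                    # terminate: little information left
        update(grid, best)           # move robot, take image, fuse into grid
        candidates.remove(best)
    return grid
```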

    Using ToF and RGBD cameras for 3D robot perception and manipulation in human environments

    Robots, traditionally confined to factories, are nowadays moving into domestic and assistive environments, where they need to deal with complex object shapes, deformable materials, and pose uncertainties at human pace. To attain quick 3D perception, new cameras delivering registered depth and intensity images at a high frame rate hold a lot of promise, and many robotics researchers are therefore experimenting with structured-light RGBD and Time-of-Flight (ToF) cameras. In this paper, both technologies are critically compared to help researchers evaluate their use on real robots. The focus is on 3D perception at close distances for the different types of objects that a robot may handle in a human environment. We review three robotics applications. The analysis of several performance aspects indicates the complementarity of the two camera types, since the user-friendliness and higher resolution of RGBD cameras are counterbalanced by the capability of ToF cameras to operate outdoors and perceive details.
    This research is partially funded by the EU GARNICS project FP7-247947, by CSIC project MANIPlus 201350E102, by the Spanish Ministry of Science and Innovation under project PAU+ DPI2011-27510, and by the Catalan Research Commission under Grant SGR-155.
    Peer Reviewed

    ToF cameras for eye-in-hand robotics

    This work was supported by the Spanish Ministry of Science and Innovation under project PAU+ DPI2011-27510, by the EU project IntellAct FP7-ICT-2009-6-269959, and by the Catalan Research Commission through SGR-00155.
    Peer Reviewed

    3D sensor planning framework for leaf probing

    Paper presented at the International Conference on Intelligent Robots and Systems, held in Hamburg (Germany), from September 28 to October 2, 2015.
    Modern plant phenotyping requires active sensing technologies and particular exploration strategies. This article proposes a new method for actively exploring a 3D region of space with the aim of localizing special areas of interest for manipulation tasks over plants. In our method, exploration is guided by a multi-layer occupancy grid map. This map, together with a multiple-view estimator and a maximum-information-gain gathering approach, incrementally provides a better understanding of the scene until a task termination criterion is reached. This approach is designed to be applicable to any task entailing 3D object exploration where some previous knowledge of the object's general shape is available. Its suitability is demonstrated here for an eye-in-hand arm configuration in a leaf probing application.
    This research has been partially funded by the CSIC project MANIPlus 201350E102.
    Peer Reviewed
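
    To make the maximum-information-gain criterion concrete, the following is an illustrative sketch of per-viewpoint gain computed as occupancy entropy weighted by a per-pixel sensor confidence map. The weighting scheme is an assumption; the paper's exact gain formulation may differ.

```python
import numpy as np

# Illustrative gain measure: per-cell binary entropy of the occupancy
# belief, weighted by per-pixel sensor confidence (an assumed scheme).

def cell_entropy(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def weighted_gain(occupancy, confidence):
    """occupancy: (H, W) cell probabilities seen from a candidate view;
    confidence: (H, W) per-pixel sensor reliability in [0, 1]."""
    return float((confidence * cell_entropy(occupancy)).sum())

# A planner would evaluate weighted_gain() for every candidate viewpoint
# and move the eye-in-hand camera to the maximizer.
```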

    ToF cameras for active vision in robotics

    ToF cameras are now a mature technology that is being widely adopted to provide sensory input to robotic applications. Depending on the nature of the objects to be perceived and the viewing distance, we distinguish two groups of applications: those requiring capture of the whole scene and those centered on an object. It will be demonstrated that it is in this last group of applications, in which the robot has to locate and possibly manipulate an object, that the distinctive characteristics of ToF cameras can be better exploited. After presenting the physical sensor features and the calibration requirements of such cameras, we review some representative works, highlighting for each one which of the distinctive ToF characteristics have been most essential. Even if at low resolution, the acquisition of 3D images at frame rate is one of the most important features, as it enables quick background/foreground segmentation. A common use is in combination with classical color cameras. We present three developed applications, using a mobile robot and a robotic arm, to exemplify some of the stated advantages with real images.
    This work was supported by the EU project GARNICS FP7-247947, by the Spanish Ministry of Science and Innovation under project PAU+ DPI2011-27510, and by the Catalan Research Commission through SGR-00155.
    Peer Reviewed
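
    As a toy illustration of the quick background/foreground segmentation that frame-rate depth enables, the sketch below thresholds a ToF depth image and masks a registered color image. The working-range thresholds are assumptions, and depth-color registration is taken as given.

```python
import numpy as np

# Toy sketch: fast foreground extraction by depth thresholding, then
# masking the registered color image. Range limits are assumed values.

def foreground_mask(depth, near=0.3, far=1.2):
    """depth in meters; keep pixels within an assumed working range."""
    return (depth > near) & (depth < far)

def segment_object(color, depth):
    mask = foreground_mask(depth)
    out = np.zeros_like(color)
    out[mask] = color[mask]        # registered color pixels of the object
    return out, mask
```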

    Segmenting color images into surface patches by exploiting sparse depth data

    Paper presented at WACV 2011, held in Kona (USA), from January 5 to 7.
    We present a new method for segmenting color images into their composite surfaces by combining color segmentation with model-based fitting using sparse depth data, acquired with time-of-flight (Swissranger, PMD CamCube) and stereo techniques. The main target of our work is the segmentation of plant structures, i.e., leaves, from color-depth images, and the extraction of color and 3D shape information for automating manipulation tasks. Since segmentation is performed in the dense color space, even sparse, incomplete, or noisy depth information can be used. This kind of data often represents a major challenge for methods operating directly in the 3D data space. To achieve our goal, we construct a three-stage segmentation hierarchy by segmenting the color image at different resolutions, assuming that "true" surface boundaries must appear at some point along the segmentation hierarchy. 3D surfaces are then fitted to the color-segment areas using the depth data. The segments that minimize the fitting error are selected and used to construct a new segmentation. An additional region-merging and growing stage is then applied to avoid over-segmentation and to label previously unclustered points. Experimental results demonstrate that the method succeeds in segmenting a variety of domestic objects and plants into quadratic surfaces. At the end of the procedure, the sparse depth data is completed using the extracted surface models, resulting in dense depth maps. For stereo, the resulting disparity maps are compared with ground truth and the average error is computed.
    This research is partially funded by the EU GARNICS project FP7-247947, the Consolider-Ingenio project CSD2007-00018, and the Catalan Research Commission under 2009SGR155. G. Alenyà and S. Foix were supported by CSIC under a JAE-Doc and a JAE-Pre-Doc fellowship, respectively. B. Dellen acknowledges support from the Spanish Ministry for Science and Innovation via a Ramón y Cajal fellowship.
    Peer Reviewed
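
    The model-fitting step lends itself to a short sketch: fit a quadratic surface z = ax^2 + by^2 + cxy + dx + ey + f to the sparse depth samples inside one color segment via least squares, and score the segment by its RMS fitting error. The exact model selection and scoring in the paper may differ.

```python
import numpy as np

# Sketch of quadratic surface fitting to sparse depth samples of one
# color segment. Segment selection details are simplified assumptions.

def fit_quadratic(x, y, z):
    """x, y, z: 1D arrays of depth samples inside one color segment."""
    A = np.column_stack([x*x, y*y, x*y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residual = np.sqrt(np.mean((A @ coeffs - z) ** 2))  # RMS fitting error
    return coeffs, residual

# Segments minimizing `residual` would be kept for the new segmentation;
# the fitted models can then densify the sparse depth inside each segment.
```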

    Task-driven active sensing framework applied to leaf probing

    This manuscript version is made available under the CC-BY-NC-ND 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
    This article presents a new method for actively exploring a 3D workspace with the aim of localizing regions that are relevant for a given task. Our method encodes the exploration route in a multi-layer occupancy grid map. This map, together with a multiple-view estimator and a maximum-information-gain gathering approach, incrementally provides a better understanding of the scene until the task termination criterion is reached. This approach is designed to be applicable to any task entailing 3D object exploration where some previous knowledge of the object's approximate shape is available. Its suitability is demonstrated here for a leaf probing task using an eye-in-hand arm configuration in the context of a plant phenotyping application.
    Peer Reviewed. Postprint (author's final draft).
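
    As a rough sketch of what a multi-layer occupancy grid might look like here: one layer holds the occupancy belief, another encodes task relevance, and a termination test checks that the task-relevant cells are sufficiently determined. The log-odds fusion and entropy threshold are standard choices assumed for illustration, not the article's exact formulation.

```python
import numpy as np

# Illustrative multi-layer grid: occupancy belief in log-odds plus a
# task-relevance layer. Fusion rule and threshold are assumed choices.

class MultiLayerGrid:
    def __init__(self, shape):
        self.logodds = np.zeros(shape)   # occupancy belief, log-odds
        self.task = np.zeros(shape)      # task-relevance prior per cell

    def update(self, cells, p_hit):
        """Fuse a measurement: `cells` observed with probability p_hit."""
        p = np.clip(p_hit, 1e-3, 1 - 1e-3)
        self.logodds[cells] += np.log(p / (1 - p))

    def belief(self):
        return 1.0 / (1.0 + np.exp(-self.logodds))

    def done(self, entropy_thresh=0.2):
        """Terminate when task-relevant cells are well determined."""
        p = np.clip(self.belief(), 1e-6, 1 - 1e-6)
        h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
        mask = self.task > 0
        return bool(mask.any()) and float(h[mask].mean()) < entropy_thresh
```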

    Robotic leaf probing via segmentation of range data into surface patches

    Presented at the International Conference on Intelligent Robots and Systems (IROS AGROBOTICS), Workshop on Agricultural Robotics: Enabling Safe, Efficient, Affordable Robots for Food Production, held in Portugal from October 7 to 12, 2012.
    We present a novel method for the robotized probing of plant leaves using Time-of-Flight (ToF) sensors. Plant images are segmented into surface patches by combining a segmentation of the infrared intensity image, provided by the ToF camera, with quadratic surface fitting on the ToF depth data. Leaf models are fitted to the boundaries of the segments and used to determine probing points and to evaluate the suitability of leaves for being sampled. The robustness of the approach is evaluated by repeatedly placing a specially adapted, robot-mounted SPAD meter on the probing points, which are extracted automatically. The number of successful chlorophyll measurements is counted, and the total time for processing the visual data and probing the plant with the robot is measured for each trial. In case of failure, the underlying causes are determined and reported, allowing a better assessment of the applicability of the method in real scenarios.
    This research is partially funded by the EU GARNICS project FP7-247947, by the Spanish Ministry of Science and Innovation under projects PAU+ and MIPRCV Consolider-Ingenio CSD2007-00018, and by the Catalan Research Commission. B. Dellen acknowledges support from the Spanish Ministry for Science and Innovation via the Ramón y Cajal program. S. Foix is supported by a PhD fellowship from CSIC's JAE program.
    Peer Reviewed
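
    To illustrate how a probing point might be derived from a fitted patch, the sketch below places the probe at a segment's centroid on a quadratic surface z = f(x, y) and aligns the approach with the local surface normal. This mirrors the described probing-point step only in spirit; the leaf-model fitting details are not reproduced.

```python
import numpy as np

# Sketch: probing point and approach direction from a fitted quadratic
# patch z = f(x, y). Centroid placement is an assumed simplification.

def probe_pose(coeffs, cx, cy):
    """coeffs: (a, b, c, d, e, f) of z = ax^2 + by^2 + cxy + dx + ey + f;
    (cx, cy): segment centroid in the surface's local frame."""
    a, b, c, d, e, f = coeffs
    z = a*cx*cx + b*cy*cy + c*cx*cy + d*cx + e*cy + f
    # Partial derivatives of the surface give the normal direction.
    dzdx = 2*a*cx + c*cy + d
    dzdy = 2*b*cy + c*cx + e
    normal = np.array([-dzdx, -dzdy, 1.0])
    return np.array([cx, cy, z]), normal / np.linalg.norm(normal)
```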